Hierarchical Reinforcement Learning for Robot Navigation

Authors

  • Bastian Bischoff
  • Duy Nguyen-Tuong
  • I-Hsuan Lee
  • Felix Streichert
  • Alois Knoll
Abstract

For complex tasks, such as manipulation and robot navigation, reinforcement learning (RL) is well known to be difficult due to the curse of dimensionality. To overcome this complexity and make RL feasible, hierarchical RL (HRL) has been suggested. The basic idea of HRL is to divide the original task into elementary subtasks, which can be learned using RL. In this paper, we propose an HRL architecture for learning a robot's movements, e.g. robot navigation. The proposed HRL consists of two layers: (i) movement planning and (ii) movement execution. In the planning layer, e.g. for generating navigation trajectories, discrete RL is employed in combination with movement primitives. Given the movement plan and the corresponding primitives, the policy for movement execution can be learned in the second layer using continuous RL. The proposed approach is implemented and evaluated on a mobile robot platform for a navigation task.
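As a rough illustration of the two-layer idea, the sketch below pairs a tabular Q-learning planner that selects among a few assumed movement primitives with a simple linear-Gaussian policy per primitive for the execution layer. The primitive set, the specific learners, and all names are assumptions made for illustration; the abstract only specifies "discrete RL" for planning and "continuous RL" for execution, not these particular algorithms.

```python
import numpy as np

# Hypothetical sketch of the two-layer HRL idea from the abstract: a discrete
# planning layer chooses among movement primitives, and a continuous execution
# layer learns a low-level policy per primitive. Both learners are simplified
# stand-ins, not the paper's actual implementation.

PRIMITIVES = ["forward", "turn_left", "turn_right"]  # assumed primitive set

class DiscretePlanner:
    """Tabular Q-learning over (grid cell, primitive) pairs (planning layer)."""
    def __init__(self, n_states, alpha=0.1, gamma=0.95, eps=0.1):
        self.q = np.zeros((n_states, len(PRIMITIVES)))
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def select(self, s):
        if np.random.rand() < self.eps:
            return np.random.randint(len(PRIMITIVES))
        return int(np.argmax(self.q[s]))

    def update(self, s, a, r, s_next):
        td_target = r + self.gamma * self.q[s_next].max()
        self.q[s, a] += self.alpha * (td_target - self.q[s, a])

class ContinuousExecutor:
    """Linear-Gaussian policy per primitive, improved by a REINFORCE-style
    gradient step, as a stand-in for the continuous-RL execution layer."""
    def __init__(self, obs_dim, act_dim, lr=1e-2):
        self.w = {p: np.zeros((act_dim, obs_dim)) for p in PRIMITIVES}
        self.lr, self.sigma = lr, 0.1

    def act(self, primitive, obs):
        mean = self.w[primitive] @ obs
        return mean + self.sigma * np.random.randn(*mean.shape)

    def update(self, primitive, obs, action, ret):
        # Policy-gradient step on the primitive's linear controller.
        mean = self.w[primitive] @ obs
        grad = np.outer(action - mean, obs) / self.sigma**2
        self.w[primitive] += self.lr * ret * grad
```

In such a decomposition, the planner works on a coarse, discrete state space while each primitive's controller only has to solve a short, local continuous-control problem, which is what keeps both learning problems tractable.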

Similar references

Dynamic Obstacle Avoidance by Distributed Algorithm based on Reinforcement Learning (RESEARCH NOTE)

In this paper we focus on the application of reinforcement learning to obstacle avoidance in dynamic environments in wireless sensor networks. A distributed algorithm based on reinforcement learning is developed for sensor networks to guide a mobile robot through the dynamic obstacles. The sensor network models the danger of the area under coverage as obstacles, and has the property of adoption o...
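The blurb leaves the algorithm's details out, but the core idea of letting sensor nodes compute navigation values locally can be sketched as distributed, neighbour-only value updates on a grid. The grid size, costs, and update rule below are assumptions, and the sketch uses a simple cost-to-go propagation rather than the note's actual learning rule.

```python
import numpy as np

# Simplified sketch (assumed, not the cited algorithm): every sensor node holds
# a cost-to-go estimate for its own cell and refines it using only neighbouring
# nodes' values, so the computation stays distributed. Cells flagged as dynamic
# obstacles get a high traversal cost, which steers the robot around them.

H, W = 10, 10
goal = (9, 9)
obstacles = {(3, 4), (4, 4), (6, 6)}          # cells the sensors mark as dangerous
cost_to_go = np.full((H, W), np.inf)
cost_to_go[goal] = 0.0

def neighbours(r, c):
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < H and 0 <= nc < W:
            yield nr, nc

# Repeated local sweeps: each node pulls only its neighbours' current estimates.
for _ in range(H * W):
    for r in range(H):
        for c in range(W):
            if (r, c) == goal:
                continue
            step_cost = 50.0 if (r, c) in obstacles else 1.0
            cost_to_go[r, c] = step_cost + min(cost_to_go[n] for n in neighbours(r, c))

# The robot simply descends the cost-to-go field reported by nearby nodes.
pos, path = (0, 0), [(0, 0)]
while pos != goal and len(path) < H * W:
    pos = min(neighbours(*pos), key=lambda n: cost_to_go[n])
    path.append(pos)
print(path)
```

When obstacles move, the affected nodes simply raise or lower their local cost and the change propagates outward through the same neighbour updates, which is what makes the scheme suited to dynamic environments.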


A Constructive Connectionist Approach Towards Continual Robot Learning

This work presents an approach for combining reinforcement learning, learning by imitation, and incremental hierarchical development. The approach is applied to a realistic simulated mobile robot that learns to perform a navigation task by imitating the movements of a teacher and then continues to learn by receiving reinforcement. The behaviours of the robot are represented as sensation-action rul...



Cooperation Learning for Behaviour-based Neural-fuzzy Controller in Robot Navigation

Based on the previously proposed extended neural-fuzzy network, this paper presents a cooperation scheme of training-data-based learning and reinforcement learning for constructing sensor-based behaviour modules in robot navigation. In order to solve the reinforcement learning problem, a reinforcement-based neural-fuzzy control system (RNFCS) is provided, which consists of a neural-fuzzy controller...


Robot Navigation in Partially Observable Domains using Hierarchical Memory-Based Reinforcement Learning

 In this paper, we attempt to find a solution to the problem of robot navigation in a domain with partial observability. The domain is a grid-world with intersecting corridors, where the agent learns an optimal policy for navigation by making use of a hierarchical memory-based learning algorithm. We define a hierarchy of levels over which the agent abstracts the learning process, as well as it...
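As a rough sketch of the memory-based ingredient described here (not the cited hierarchy itself), the agent below keys a tabular Q-function on a short history of recent observations and actions, which is one common way to cope with aliased observations in a corridor grid-world. The history length, update rule, and all names are assumptions for illustration.

```python
import random
from collections import defaultdict, deque

# Hypothetical sketch: in a partially observable corridor world the agent only
# sees a local observation, so the learner keys its Q-table on a short memory
# of recent (observation, action) pairs instead of the true state.

class MemoryQAgent:
    def __init__(self, actions, history_len=3, alpha=0.2, gamma=0.95, eps=0.1):
        self.q = defaultdict(float)                # Q[(memory_key, action)]
        self.actions = actions
        self.memory = deque(maxlen=history_len)    # recent (obs, action) pairs
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def _key(self, obs):
        # Memory plus current observation disambiguates aliased corridor cells.
        return (tuple(self.memory), obs)

    def act(self, obs):
        if random.random() < self.eps:
            return random.choice(self.actions)
        key = self._key(obs)
        return max(self.actions, key=lambda a: self.q[(key, a)])

    def update(self, obs, action, reward, next_obs):
        key = self._key(obs)
        self.memory.append((obs, action))          # extend memory with this step
        next_key = self._key(next_obs)
        best_next = max(self.q[(next_key, a)] for a in self.actions)
        td_error = reward + self.gamma * best_next - self.q[(key, action)]
        self.q[(key, action)] += self.alpha * td_error
```

The hierarchical part of the cited work would sit on top of such an agent, abstracting over levels of the learning process, which is omitted here for brevity.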


Continual Robot Learning with Constructive Neural Networks

In this paper, we present an approach for combining reinforcement learning, learning by imitation, and incremental hierarchical development. We apply this approach to a realistic simulated mobile robot that learns to perform a navigation task by imitating the movements of a teacher and then continues to learn by receiving reinforcement. The behaviours of the robot are represented as sensation-...
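A minimal sketch of the combination described here, under assumed interfaces: sensation-action rules are seeded from teacher demonstrations and then refined with tabular Q-learning once reinforcement becomes available. The rule representation and the constructive-network aspect are simplified away, and all names are placeholders.

```python
import random
from collections import defaultdict

# Illustrative only: 'sensation-action rules' are reduced to a Q-table keyed by
# a discretised sensation; demonstrations seed the initial rules, and
# reinforcement learning then continues to refine them.

ACTIONS = ["forward", "left", "right"]

def bootstrap_from_teacher(demos, demo_bonus=1.0):
    """Turn (sensation, action) demonstrations into an optimistic initial Q-table."""
    q = defaultdict(float)
    for sensation, action in demos:
        q[(sensation, action)] += demo_bonus       # imitation: prefer demonstrated actions
    return q

def q_learning_step(q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """Continue learning from reinforcement after the imitation phase."""
    best_next = max(q[(s_next, b)] for b in ACTIONS)
    q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])

def policy(q, sensation, eps=0.1):
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(sensation, a)])

# Usage sketch: demos is a list of (sensation, action) pairs from the teacher.
demos = [("wall_left", "forward"), ("wall_ahead", "right")]
q = bootstrap_from_teacher(demos)
q_learning_step(q, "wall_ahead", "right", r=1.0, s_next="wall_left")
print(policy(q, "wall_ahead", eps=0.0))
```

Seeding the table from demonstrations gives the robot a reasonable starting policy, so the subsequent reinforcement phase mostly corrects and extends the teacher's behaviour rather than learning from scratch.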


Publication date: 2013